
Reservoir Network with Structural Plasticity for Human Activity Recognition

Zyarah, Abdullah M., Abdul-Hadi, Alaa M., Kudithipudi, Dhireesha

arXiv.org Artificial Intelligence

The unprecedented dissemination of edge devices is accompanied by a growing demand for neuromorphic chips that can process time-series data natively, without cloud support. The echo state network (ESN) is a class of recurrent neural networks that can identify unique patterns in time-series data and predict future events. It is known for minimal computing-resource requirements and fast training, owing to the use of linear optimization solely at the readout stage. In this work, a custom-designed neuromorphic chip based on ESN and targeting edge devices is proposed. The proposed system supports various learning mechanisms, including structural plasticity and synaptic plasticity, locally on-chip. This provides the network with an additional degree of freedom to continuously learn, adapt, and alter its structure and sparsity level, ensuring high performance and continuous stability. We demonstrate the performance of the proposed system, as well as its robustness to noise, on real-world time-series datasets while considering various topologies of data movement. Average accuracies of 95.95% and 85.24% are achieved on human activity recognition and prosthetic finger control, respectively.

The last decade has seen significant advancement in neuromorphic computing, with a major thrust centered around processing streaming data using recurrent neural networks (RNNs). Although RNNs demonstrate promising performance in numerous domains, including speech recognition [1], computer vision [2], stock trading [3], and medical diagnosis [4], such networks suffer from slow convergence and intensive computations [5]. To bypass these challenges, Jaeger and Maass suggested leveraging the rich dynamics offered by the networks' recurrent connections and random parameters, and limiting training to the networks' later layers, particularly the readout layer [7]-[9]. With that, network training and its computational complexity are significantly simplified.
There are three classes of RNNs trained using this approach: the liquid state machine (LSM) [7], the delayed-feedback reservoir [10], [11], and the echo state network (ESN), which is the focus of this work. ESN has been demonstrated on a variety of tasks, including pattern recognition, anomaly detection [12], spatial-temporal forecasting [13], and modeling dynamic motions in biomimetic robots [14].
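The training simplicity these abstracts refer to can be made concrete with a minimal ESN sketch: the input and recurrent weights are fixed at random, and only the linear readout is fitted, here by ridge regression on a toy one-step-ahead sine-prediction task. All dimensions and scaling constants below are illustrative assumptions, not values from any of the papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
n_in, n_res, T = 1, 100, 500

# Random, fixed input and reservoir weights (only the readout is trained).
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(W_in @ u_t + W @ x)
        states[t] = x
    return states

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(np.linspace(0, 20 * np.pi, T + 1))[:, None]
X = run_reservoir(u[:-1])   # reservoir states, shape (T, n_res)
Y = u[1:]                   # next-step targets

# Train the readout with ridge regression -- linear optimization only.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

pred = X @ W_out
mse = float(np.mean((pred[100:] - Y[100:]) ** 2))  # skip a washout period
print(mse)
```

Scaling the recurrent matrix to a spectral radius below one is the usual heuristic for the echo state property, i.e., for the reservoir's state to depend on the input history rather than on its initial condition.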


Echo State network for coarsening dynamics of charge density waves

Dinh, Clement, Fan, Yunhao, Chern, Gia-Wei

arXiv.org Artificial Intelligence

An echo state network (ESN) is a type of reservoir computer that uses a recurrent neural network with a sparsely connected hidden layer. Compared with other recurrent neural networks, one great advantage of the ESN is the simplicity of its training process. Yet, despite the seemingly restricted learnable parameters, ESN has been shown to successfully capture the spatial-temporal dynamics of complex patterns. Here we build an ESN to model the coarsening dynamics of charge-density waves (CDW) in a semi-classical Holstein model, which exhibits a checkerboard electron density modulation at half-filling stabilized by a commensurate lattice distortion. The inputs to the ESN are local CDW order-parameters in a finite neighborhood centered around a given site, while the output is the predicted CDW order of the center site at the next time step. Special care is taken in the design of couplings between hidden layer and input nodes to ensure lattice symmetries are properly incorporated into the ESN model. Since the model predictions depend only on CDW configurations of a finite domain, the ESN is scalable and transferable in the sense that a model trained on a dataset from a small system can be directly applied to dynamical simulations on larger lattices. Our work opens a new avenue for efficient dynamical modeling of pattern formations in functional electron materials.
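The scalability claim rests on feeding the network only a fixed-size neighborhood per site, so the same model applies to any lattice size. A minimal sketch of that preprocessing step, assuming a periodic square lattice and a scalar order parameter per site (the symmetry-aware coupling design from the abstract is not reproduced here):

```python
import numpy as np

def neighborhood_inputs(field, r):
    """Extract the (2r+1) x (2r+1) neighborhood around every site of a
    periodic square lattice; each row is one site's fixed-size model input.
    The same function works for any lattice size L, which is what makes a
    locally trained model transferable to larger systems."""
    L = field.shape[0]
    padded = np.pad(field, r, mode="wrap")  # periodic boundary conditions
    rows = []
    for i in range(L):
        for j in range(L):
            rows.append(padded[i:i + 2 * r + 1, j:j + 2 * r + 1].ravel())
    return np.array(rows)

# Toy 4x4 "order parameter" field; r=1 gives a 3x3 window per site.
field = np.arange(16.0).reshape(4, 4)
X = neighborhood_inputs(field, 1)
print(X.shape)  # one 9-element input row per lattice site
```

The center of each raveled window (index 4 for a 3x3 window) is the site whose next-step order parameter the model would predict.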


Time-Series Forecasting and Sequence Learning Using Memristor-based Reservoir System

Zyarah, Abdullah M., Kudithipudi, Dhireesha

arXiv.org Artificial Intelligence

Pushing the frontiers of time-series information processing in ever-growing edge devices with stringent resources has been impeded by the system's ability to process information and learn locally on the device. Local processing and learning typically demand intensive computations and massive storage, as the process involves retrieving information and tuning hundreds of parameters back in time. In this work, we developed a memristor-based echo state network accelerator that features efficient temporal data processing and in-situ online learning. The proposed design is benchmarked using various datasets involving real-world tasks, such as forecasting load energy consumption and weather conditions. The experimental results illustrate that the hardware model experiences a marginal degradation (~4.8%) in performance compared to the software model. This is mainly attributed to the limited precision and dynamic range of network parameters when emulated using memristor devices. The proposed system is evaluated for lifespan, robustness, and energy-delay product. It is observed that the system demonstrates reasonable robustness to device-failure rates below 10%, which may occur due to stuck-at faults. Furthermore, a 246X reduction in energy consumption is achieved when compared to a custom CMOS digital design implemented at the same technology node.
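The "limited precision and dynamic range" attributed to memristive weights can be illustrated with a simple uniform-quantization sketch: weights are snapped to a finite set of evenly spaced levels, a rough stand-in (an assumption of this sketch, not the paper's device model) for discrete memristor conductance states, and the representation error grows as the number of levels shrinks.

```python
import numpy as np

def quantize(w, levels):
    """Map weights onto `levels` evenly spaced values spanning their range,
    a crude proxy for the limited precision of memristive conductances."""
    w_min, w_max = w.min(), w.max()
    step = (w_max - w_min) / (levels - 1)
    return w_min + np.round((w - w_min) / step) * step

rng = np.random.default_rng(1)
w = rng.normal(size=1000)  # stand-in for trained readout weights

for levels in (256, 16, 4):
    err = np.max(np.abs(quantize(w, levels) - w))
    print(levels, err)  # max error grows as precision shrinks
```

For uniform quantization the worst-case error is half the level spacing, i.e., (w_max - w_min) / (2 * (levels - 1)), which is why coarse devices cost accuracy relative to a floating-point software model.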


Evaluating Echo State Network for Parkinson's Disease Prediction using Voice Features

Hosseininian, Seyedeh Zahra Seyedi, Tajari, Ahmadreza, Ghalehnoie, Mohsen, Alfi, Alireza

arXiv.org Artificial Intelligence

Parkinson's disease (PD) is a debilitating neurological disorder that necessitates precise and early diagnosis for effective patient care. This study aims to develop a diagnostic model capable of achieving both high accuracy and minimal false negatives, a critical factor in clinical practice. Given the limited training data, a feature selection strategy utilizing ANOVA is employed to identify the most informative features. Subsequently, various machine learning methods, including Echo State Networks (ESN), Random Forest, k-Nearest Neighbors, Support Vector Classifier, Extreme Gradient Boosting, and Decision Tree, are employed and thoroughly evaluated. Statistical analyses of the results highlight ESN's exceptional performance, showcasing not only superior accuracy but also the lowest false negative rate among all methods. In particular, the ESN method maintains a false negative rate of less than 8% in 83% of cases. ESN's capacity to strike a delicate balance between diagnostic precision and minimizing misclassifications positions it as an exemplary choice for PD diagnosis, especially in scenarios characterized by limited data. This research marks a significant step towards more efficient and reliable PD diagnosis, with potential implications for enhanced patient outcomes and healthcare dynamics.
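The ANOVA-based feature selection mentioned above ranks each feature by its one-way F-statistic: between-class variance over within-class variance. A minimal NumPy sketch of that criterion (the same ranking score that scikit-learn's `f_classif` computes), on synthetic data where only one feature is informative; the data here are fabricated for illustration and unrelated to the paper's voice features:

```python
import numpy as np

def anova_f_scores(X, y):
    """One-way ANOVA F-statistic per feature (columns of X) for labels y."""
    classes = np.unique(y)
    n, k = len(y), len(classes)
    grand = X.mean(axis=0)
    ss_between = np.zeros(X.shape[1])
    ss_within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        ss_between += len(Xc) * (Xc.mean(axis=0) - grand) ** 2
        ss_within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    # Mean squares: between-class / within-class variance ratio.
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(2)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 3))
X[:, 0] += 2.0 * y          # feature 0 is informative; 1 and 2 are noise
scores = anova_f_scores(X, y)
print(scores.argmax())      # the informative feature gets the top score
```

Features are then kept in descending order of F-score, which is attractive with limited training data because the test is cheap and univariate.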


Expressive probabilistic sampling in recurrent neural networks

Chen, Shirui, Jiang, Linxing Preston, Rao, Rajesh P. N., Shea-Brown, Eric

arXiv.org Artificial Intelligence

In sampling-based Bayesian models of brain function, neural activities are assumed to be samples from probability distributions that the brain uses for probabilistic computation. However, a comprehensive understanding of how mechanistic models of neural dynamics can sample from arbitrary distributions is still lacking. We use tools from functional analysis and stochastic differential equations to explore the minimum architectural requirements for $\textit{recurrent}$ neural circuits to sample from complex distributions. We first consider the traditional sampling model consisting of a network of neurons whose outputs directly represent the samples (sampler-only network). We argue that synaptic current and firing-rate dynamics in the traditional model have limited capacity to sample from a complex probability distribution. We show that the firing rate dynamics of a recurrent neural circuit with a separate set of output units can sample from an arbitrary probability distribution. We call such circuits reservoir-sampler networks (RSNs). We propose an efficient training procedure based on denoising score matching that finds recurrent and output weights such that the RSN implements Langevin sampling. We empirically demonstrate our model's ability to sample from several complex data distributions using the proposed neural dynamics and discuss its applicability to developing the next generation of sampling-based brain models.
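The Langevin sampling that the trained RSN is meant to implement can be shown in its simplest form: iterate x ← x + (ε/2)·∇log p(x) + √ε·ξ with Gaussian noise ξ, and the iterates approach samples from p. The sketch below uses a standard normal target, whose score ∇log p(x) = -x is known in closed form; in the paper's setting the score is instead learned via denoising score matching and realized by the recurrent dynamics, which this toy example does not attempt.

```python
import numpy as np

rng = np.random.default_rng(3)

def score(x):
    """Score of a standard normal target: d/dx log p(x) = -x."""
    return -x

# Unadjusted Langevin dynamics, run as many independent chains in parallel.
eps, steps, n_chains = 0.01, 5000, 2000
x = rng.uniform(-5, 5, n_chains)          # deliberately poor initialization
for _ in range(steps):
    x = x + 0.5 * eps * score(x) + np.sqrt(eps) * rng.standard_normal(n_chains)

print(x.mean(), x.std())  # empirical moments approach the target's 0 and 1
```

For small step sizes the stationary distribution of this chain is close to the target; the discretization bias vanishes as ε → 0 (or with a Metropolis correction).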


The Power of Linear Recurrent Neural Networks

Stolzenburg, Frieder, Litz, Sandra, Michael, Olivia, Obst, Oliver

arXiv.org Artificial Intelligence

Recurrent neural networks are a powerful means to cope with time series. We show how autoregressive linear, i.e., linearly activated, recurrent neural networks (LRNNs) can approximate any time-dependent function f(t) given by a number of function values. The approximation can effectively be learned by simply solving a linear equation system; no backpropagation or similar methods are needed. Furthermore, and this is probably the main contribution of this article, the size of an LRNN can be reduced significantly in one step after inspecting the spectrum of the network transition matrix, i.e., its eigenvalues, by taking only the most relevant components. Therefore, in contrast to other approaches, we learn not only the network weights but also the network architecture. LRNNs have interesting properties: they end up in ellipse trajectories in the long run and allow the prediction of further values and compact representations of functions. We demonstrate this by several experiments, among them multiple superimposed oscillators (MSO), robotic soccer, and predicting stock prices. LRNNs outperform the previous state-of-the-art for the MSO task with a minimal number of units.
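The "solving a linear equation system, no backpropagation" idea can be sketched as a linear autoregression: delay-embed the observed function values, fit the next-value map by least squares, and then iterate the learned recurrence to extrapolate. This is a simplified stand-in for the paper's LRNN formulation (the eigenvalue-based size reduction is not shown), using a toy two-frequency signal reminiscent of the MSO task.

```python
import numpy as np

# Toy signal: a sum of two sinusoids, sampled at integer times.
t = np.arange(60)
f = np.sin(0.2 * t) + 0.5 * np.sin(0.5 * t)

# Delay-embed: state x_t = (f_t, f_{t-1}, ..., f_{t-d+1}).
d = 8
X = np.column_stack([f[d - 1 - k: len(f) - k] for k in range(d)])

# One linear least-squares solve maps each state to the next value.
w, *_ = np.linalg.lstsq(X[:-1], f[d:], rcond=None)

# Autoregressive extrapolation beyond the training window.
state = list(f[-d:])                       # most recent d values, oldest first
preds = []
for _ in range(20):
    nxt = float(np.dot(w, state[::-1]))    # newest value first, as in X's rows
    preds.append(nxt)
    state = state[1:] + [nxt]

t_future = np.arange(60, 80)
truth = np.sin(0.2 * t_future) + 0.5 * np.sin(0.5 * t_future)
print(np.max(np.abs(np.array(preds) - truth)))  # small extrapolation error
```

Because two sinusoids satisfy an exact 4-term linear recurrence, an order-8 linear model captures the signal exactly, which is why purely linear recurrent dynamics suffice here; the paper's spectral reduction step would shrink such an over-sized model down to the relevant eigencomponents.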